Uniform Convergence, Stability and Learnability for Ranking Problems
Authors
Abstract
Most studies have been devoted to the design of efficient algorithms and to their evaluation and application on diverse ranking problems, whereas little work has been devoted to the theoretical study of ranking learnability. In this paper, we study the relation between uniform convergence, stability, and learnability for ranking. In contrast to supervised learning, where learnability is equivalent to uniform convergence, we show that ranking uniform convergence is sufficient but not necessary for ranking learnability with an AERM (asymptotic empirical risk minimizer), and we further present a sufficient condition for ranking uniform convergence with respect to the bipartite ranking loss. Since ranking uniform convergence is unnecessary for ranking learnability, we prove that ranking average stability is a necessary and sufficient condition for ranking learnability.
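The bipartite ranking loss mentioned above has a standard empirical form: the fraction of positive–negative pairs that a scoring function misranks (equivalently, one minus the AUC). A minimal sketch, with illustrative function and variable names not taken from the paper:

```python
import numpy as np

def bipartite_ranking_loss(scores_pos, scores_neg):
    """Empirical bipartite ranking loss: the fraction of
    (positive, negative) pairs that are misranked, counting
    ties as half an error."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]  # shape (n_pos, 1)
    sn = np.asarray(scores_neg, dtype=float)[None, :]  # shape (1, n_neg)
    # Pairwise 0/1 loss via broadcasting, 0.5 on ties.
    errors = (sp < sn) + 0.5 * (sp == sn)
    return errors.mean()

# A scorer that ranks every positive above every negative has loss 0.
print(bipartite_ranking_loss([0.9, 0.8], [0.1, 0.2]))  # 0.0
```

Empirical risk minimization over this loss is what the paper's uniform-convergence and AERM conditions are stated for.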
Similar papers
Learnability, Stability and Uniform Convergence
The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization....
Full text
Learnability and Stability in the General Learning Setting
We establish that stability is necessary and sufficient for learning, even in the General Learning Setting where uniform convergence conditions are not necessary for learning, and where learning might only be possible with a non-ERM learning rule. This goes beyond previous work on the relationship between stability and learnability, which focused on supervised classification and regression, whe...
Full text
Stability Conditions for Online Learnability
Stability is a general notion that quantifies the sensitivity of a learning algorithm's output to small changes in the training dataset (e.g. deletion or replacement of a single training sample). Such conditions have recently been shown to be more powerful for characterizing learnability in the general learning setting under i.i.d. samples, where uniform convergence is not necessary for learnability...
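The replace-one perturbation described in this excerpt can be sketched directly: measure how much the algorithm's output moves, on average, when a single training point is swapped for a fresh one. All names here are illustrative, and `train_mean` is a toy stand-in for a learning algorithm:

```python
def train_mean(sample):
    """Toy 'learning algorithm': output the sample mean."""
    return sum(sample) / len(sample)

def replacement_sensitivity(sample, fresh_point, algo):
    """Average absolute change in the algorithm's output when one
    training point is replaced by a fresh point."""
    base = algo(sample)
    diffs = []
    for i in range(len(sample)):
        perturbed = sample[:i] + [fresh_point] + sample[i + 1:]
        diffs.append(abs(algo(perturbed) - base))
    return sum(diffs) / len(diffs)

# Replacing one of four points by 2 shifts the mean by at most 0.5.
print(replacement_sensitivity([0, 1, 2, 3], 2, train_mean))  # 0.25
```

For the sample mean this sensitivity shrinks as the sample grows, which is the intuition behind stability-based characterizations of learnability.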
Full text
Stochastic Convex Optimization
For supervised classification problems, it is well known that learnability is equivalent to uniform convergence of the empirical risks and thus to learnability by empirical minimization. Inspired by recent regret bounds for online convex optimization, we study stochastic convex optimization, and uncover a surprisingly different situation in the more general setting: although the stochastic conv...
Full text
Learning From An Optimization Viewpoint
Optimization has always played a central role in machine learning, and advances in the field of optimization and mathematical programming have greatly influenced machine learning models. However, the connection between optimization and learning is much deeper: one can phrase statistical and online learning problems directly as corresponding optimization problems. In this dissertation I take this...
Full text